
    Eigenvector Approximation Leading to Exponential Speedup of Quantum Eigenvalue Calculation

    We present an efficient method for preparing the initial state required by the eigenvalue approximation quantum algorithm of Abrams and Lloyd. Our method can be applied when solving continuous Hermitian eigenproblems, e.g., the Schrödinger equation, on a discrete grid. We start with a classically obtained eigenvector for a problem discretized on a coarse grid, and we efficiently construct, quantum mechanically, an approximation of the same eigenvector on a fine grid. We use this approximation as the initial state for the eigenvalue estimation algorithm, and we show the relationship between its success probability and the size of the coarse grid.
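
    The construction can be prototyped classically. The sketch below is a minimal illustration, not the paper's algorithm: it assumes a 1D Dirichlet Laplacian as a stand-in for the Hermitian eigenproblem, and the helper laplacian_eigvec, the grid sizes, and the linear prolongation are our choices. It lifts a coarse-grid ground state to a fine grid by interpolation and reports the squared overlap with the true fine-grid eigenvector, the quantity that governs the success probability of the subsequent eigenvalue estimation.

        import numpy as np

        def laplacian_eigvec(n, k=0):
            # k-th eigenvector of the 1D Dirichlet Laplacian on an n-point interior grid
            h = 1.0 / (n + 1)
            H = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                 - np.diag(np.ones(n - 1), -1)) / h**2
            _, v = np.linalg.eigh(H)
            return v[:, k] / np.linalg.norm(v[:, k])

        n_coarse, n_fine = 16, 256
        psi_c = laplacian_eigvec(n_coarse)

        # Prolong the coarse eigenvector to the fine grid by linear interpolation,
        # padding with the homogeneous Dirichlet boundary values.
        x_c = np.linspace(0.0, 1.0, n_coarse + 2)
        x_f = np.linspace(0.0, 1.0, n_fine + 2)[1:-1]
        psi_lift = np.interp(x_f, x_c, np.concatenate(([0.0], psi_c, [0.0])))
        psi_lift /= np.linalg.norm(psi_lift)

        # Squared overlap with the true fine-grid eigenvector; this governs the
        # success probability of the eigenvalue estimation step.
        psi_f = laplacian_eigvec(n_fine)
        print("success probability ~", (psi_f @ psi_lift) ** 2)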

    Adaptive Refinements in BEM

    Accuracy estimates and adaptive refinements are nowadays among the main research topics in finite element computations [6,7,8,9,11]. Their extension to boundary elements has been attempted both as a means to better understand the capabilities of the method and to improve its efficiency and exploit its obvious advantages. The possibility of implementing adaptive techniques was shown in [1,2] for h-convergence and p-convergence, respectively. Subsequent works [3,4,5,10] have shown the promising results that can be expected from these techniques. The main difficulty lies in establishing reasonable "estimation" and "indication" factors related to the global and local errors in each refinement. Although some global measures have been used, it is clear that the reduction in dimension intrinsic to boundary elements (3D→2D; 2D→1D) could allow a direct comparison among residuals, using the graphics capabilities of modern computers to enable a point-to-point comparison in place of the classical global approaches. Nevertheless, an indicator generalizing the well-known one of Peano has been produced.
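
    The estimate/indicate/refine cycle described here can be written down generically. The skeleton below is our sketch, not code from the cited works: solve, indicator, and mesh.refine are hypothetical hooks, and the bulk-chasing (Dörfler-style) marking is one common choice among several.

        import numpy as np

        def adaptive_loop(solve, indicator, mesh, tol=1e-3, theta=0.5, max_iter=10):
            """Generic adaptive refinement: solve -> estimate -> mark -> refine.

            solve(mesh)        -> approximate solution on the current mesh
            indicator(mesh, u) -> one local error indicator per element
            """
            u = solve(mesh)
            for _ in range(max_iter):
                eta = indicator(mesh, u)               # local "indication" factors
                if np.sqrt(np.sum(eta**2)) < tol:      # global "estimation" factor
                    break
                # Mark the smallest set of elements whose indicators carry a
                # fraction theta of the total squared error, then refine them.
                order = np.argsort(eta)[::-1]
                cum = np.cumsum(eta[order] ** 2)
                marked = order[: np.searchsorted(cum, theta * cum[-1]) + 1]
                mesh = mesh.refine(marked)
                u = solve(mesh)
            return mesh, u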

    A paradox in the approximation of Dirichlet control problems in curved domains

    In this paper, we study the approximation of a Dirichlet control problem governed by an elliptic equation defined on a curved domain Ω. To solve this problem numerically, it is usually necessary to approximate Ω by a (typically polygonal) new domain Ω_h. The difference between the solutions of the two infinite-dimensional control problems, one formulated in Ω and the other in Ω_h, was studied in [E. Casas and J. Sokolowski, SIAM J. Control Optim., 48 (2010), pp. 3746–3780], where an error of order O(h) was proved. In [K. Deckelnick, A. Günther, and M. Hinze, SIAM J. Control Optim., 48 (2009), pp. 2798–2819], the numerical approximation of the problem defined in Ω was considered. The authors used a finite element method such that Ω_h was the polygon formed by the union of all triangles of the mesh of parameter h. They proved an error of order O(h^{3/2}) for the difference between the continuous and discrete optimal controls. Here we show that the estimate obtained in [E. Casas and J. Sokolowski, SIAM J. Control Optim., 48 (2010), pp. 3746–3780] cannot be improved, which leads to the paradox that the numerical solution is a better approximation of the optimal control than the exact one obtained just by changing the domain from Ω to Ω_h.
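
    In symbols (our notation; the L²-norm on the control boundary is our assumption), with ū the optimal control of the problem on Ω, ū^h the optimal control of the continuous problem posed on Ω_h, and ū_h the discrete optimal control, the two cited results read

        \| \bar{u} - \bar{u}^{h} \|_{L^2(\Gamma)} = O(h), \qquad
        \| \bar{u} - \bar{u}_{h} \|_{L^2(\Gamma)} = O(h^{3/2}),

    and the present paper shows the first estimate is sharp, so the discrete control ū_h is asymptotically closer to ū than the exact control of the perturbed-domain problem.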

    Volumetric real-time particle-based representation of large unstructured tetrahedral polygon meshes

    In this paper we propose a particle-based volume rendering approach for unstructured, three-dimensional, tetrahedral polygon meshes. We stochastically generate millions of particles per second and project them onto the screen in real time. In contrast to previous rendering techniques for tetrahedral volume meshes, our method does not require prior depth sorting of the geometry. Instead, the rendered image is generated by choosing the particles closest to the camera. Furthermore, we use spatial superimposing: each pixel is constructed from multiple subpixels. This approach not only increases projection accuracy but also allows the combination of subpixels into one superpixel, which creates the well-known translucency effect of volume rendering. We show that our method is fast enough for the visualization of unstructured three-dimensional grids under hard real-time constraints and that it scales well to a high number of particles.
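
    A toy version of the pipeline fits in a few lines. The sketch below is our illustration, not the paper's implementation: it picks tetrahedra uniformly rather than by volume, uses a fixed orthographic projection, and the function names are ours. It samples particles via barycentric coordinates, keeps the particle closest to the camera in each subpixel, and averages sub×sub subpixels into one superpixel.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_tets(verts, tets, scalars, n):
            # Draw n particles from the mesh (toy: tets picked uniformly, not by volume)
            t = rng.integers(0, len(tets), n)
            u = np.sort(rng.random((n, 3)), axis=1)                  # sorted uniforms give
            b = np.column_stack([u[:, 0], u[:, 1] - u[:, 0],         # uniform barycentric
                                 u[:, 2] - u[:, 1], 1.0 - u[:, 2]])  # coordinates
            pos = np.einsum('nk,nkd->nd', b, verts[tets[t]])
            val = np.einsum('nk,nk->n', b, scalars[tets[t]])
            return pos, val

        def splat(pos, val, res=64, sub=2):
            # Nearest-particle-wins z-buffer per subpixel, orthographic along z
            H = W = res * sub
            depth = np.full((H, W), np.inf)
            img = np.zeros((H, W))
            px = np.clip((pos[:, 0] * W).astype(int), 0, W - 1)
            py = np.clip((pos[:, 1] * H).astype(int), 0, H - 1)
            for x, y, z, v in zip(px, py, pos[:, 2], val):
                if z < depth[y, x]:                  # keep the closest particle
                    depth[y, x], img[y, x] = z, v
            # Average sub x sub subpixels into one superpixel (translucency-like
            # blend; empty subpixels contribute zero in this toy version)
            return img.reshape(res, sub, res, sub).mean(axis=(1, 3))

        # Usage: one tetrahedron carrying a linear scalar field
        verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
        tets = np.array([[0, 1, 2, 3]])
        image = splat(*sample_tets(verts, tets, np.array([0., 1., 2., 3.]), 200_000))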

    High-Dimensional Stochastic Design Optimization by Adaptive-Sparse Polynomial Dimensional Decomposition

    This paper presents a novel adaptive-sparse polynomial dimensional decomposition (PDD) method for stochastic design optimization of complex systems. The method entails an adaptive-sparse PDD approximation of a high-dimensional stochastic response for statistical moment and reliability analyses; a novel integration of the adaptive-sparse PDD approximation and score functions for estimating the first-order design sensitivities of the statistical moments and failure probability; and standard gradient-based optimization algorithms. New analytical formulae are presented for the design sensitivities, which are determined simultaneously with the moments or the failure probability. Numerical results stemming from mathematical functions indicate that the new method provides more computationally efficient design solutions than existing methods. Finally, stochastic shape optimization of a jet engine bracket with 79 variables was performed, demonstrating the power of the new method to tackle practical engineering problems. (18 pages, 2 figures; to appear in Sparse Grids and Applications – Stuttgart 2014, Lecture Notes in Computational Science and Engineering 109, edited by J. Garcke and D. Pflüger, Springer International Publishing.)
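
    The score-function idea is the part that is easy to show in isolation: the moment and its design sensitivity come from the same samples, with no extra response evaluations. The sketch below is our plain Monte Carlo illustration (the paper couples this with the PDD surrogate); the toy response and the Gaussian design variable are our assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def moment_and_sensitivity(y, mu, sigma, n=100_000):
            # E[y(X)] and d/dmu E[y(X)] for X ~ N(mu, sigma^2), from ONE sample
            # set, using the Gaussian score function s(x) = (x - mu) / sigma^2
            x = rng.normal(mu, sigma, n)
            yx = y(x)
            score = (x - mu) / sigma**2
            return yx.mean(), (yx * score).mean()

        # Toy response: E[y] = mu^2 + sigma^2, so dE[y]/dmu = 2*mu
        m, dm = moment_and_sensitivity(lambda x: x**2, mu=1.0, sigma=0.5)
        print(m, dm)   # ~1.25 and ~2.0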

    Theory and methodology for estimation and control of errors due to modeling, approximation, and uncertainty

    The reliability of computer predictions of physical events depends on several factors: the mathematical model of the event, the numerical approximation of the model, and the random nature of data characterizing the model. This paper addresses the mathematical theories, algorithms, and results aimed at estimating and controlling modeling error, numerical approximation error, and error due to randomness in material coefficients and loads. A posteriori error estimates are derived and applications to problems in solid mechanics are presented.

    Reducing Uncertainties in a Wind-Tunnel Experiment using Bayesian Updating

    We perform a fully stochastic analysis of an experiment in aerodynamics. Given estimated uncertainties on the principal input parameters of the experiment, including uncertainties on the shape of the model, we apply uncertainty propagation methods to a suitable CFD model of the experimental setup. Thereby we predict the stochastic response of the measurements due to the experimental uncertainties. To reduce the variance of these uncertainties, a Bayesian updating technique is employed in which the uncertain parameters are treated as calibration parameters, with priors taken as the original uncertainty estimates. Imprecise measurements of aerodynamic forces are used as observational data. Motivation and a concrete application come from a wind-tunnel experiment whose parameters and model geometry have substantial uncertainty. In this case the uncertainty was a consequence of a poorly constructed model in the pre-measurement phase. These methodological uncertainties led to substantial uncertainties in the measurement of forces. Imprecise geometry measurements from multiple sources are used to create an improved stochastic model of the geometry. Calibration against lift and moment data then gives estimates of the remaining parameters. The effectiveness of the procedure is demonstrated by the prediction of drag with uncertainty.
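
    The updating step itself is standard and easy to sketch. Below is our toy illustration, not the study's actual setup: the lift and drag "models" are hypothetical closed-form stand-ins for the CFD model, and a random-walk Metropolis sampler calibrates a single uncertain geometry parameter against noisy lift data before predicting drag with uncertainty.

        import numpy as np

        rng = np.random.default_rng(2)

        lift = lambda th: 1.0 + 2.0 * th          # hypothetical surrogate for CFD lift
        drag = lambda th: 0.05 + 0.3 * th**2      # hypothetical surrogate for CFD drag

        obs = lift(0.4) + rng.normal(0.0, 0.05, 20)    # imprecise lift measurements
        prior_mu, prior_sd, noise_sd = 0.0, 0.5, 0.05  # prior = original uncertainty estimate

        def log_post(th):
            lp = -0.5 * ((th - prior_mu) / prior_sd) ** 2             # Gaussian prior
            return lp - 0.5 * np.sum((obs - lift(th)) ** 2) / noise_sd**2

        # Random-walk Metropolis over the calibration parameter
        samples, th, lp = [], prior_mu, log_post(prior_mu)
        for _ in range(20_000):
            prop = th + rng.normal(0.0, 0.05)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                th, lp = prop, lp_prop
            samples.append(th)
        post = np.array(samples[5_000:])               # discard burn-in

        # Posterior predictive of drag: calibrated mean with residual uncertainty
        d = drag(post)
        print(f"drag ~ {d.mean():.4f} +/- {d.std():.4f}")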